
    A systematic review of technology-enhanced L2 listening development since 2000

    Since 2000, technology-enhanced L2 listening development (TELD) has been increasingly investigated. However, systematic reviews of the technologies, learning tasks, and outcomes of TELD remain limited. To fill this gap, we conducted a systematic review of publications on TELD from 2000 to 2022 from the perspectives of technologies, learning tasks, and learning outcomes. Forty-six articles from Web of Science were screened against predefined criteria and analysed following a step-by-step procedure based on the PRISMA framework. The findings revealed 13 types of technology and 19 learning tasks useful for TELD. TELD was effective both in building listening skills and in enhancing learner emotions. The studies showed that TELD supported learner interactions, encouraged active engagement, and augmented various learning tasks. Based on the findings, we developed a TELD model consisting of two parts: “within cognitive systems,” in which learners deal with cognitive schemata, listening strategy application, and listening practice through sustained attention; and “outside of cognitive systems,” in which TELD can construct and reconstruct cognitive schemata, support listening practice, encourage and guide listening strategy application, and improve learner emotions and attention by providing learning materials and activities based on listening-related knowledge, listening exercises with feedback, prompts and feedback on strategy application, and a sense of enjoyment and comfort.

    Large-Scale Multi-Label Learning with Incomplete Label Assignments

    Multi-label learning deals with classification problems in which each instance can be assigned multiple labels simultaneously. Conventional multi-label learning approaches mainly focus on exploiting label correlations, and it is usually assumed, explicitly or implicitly, that the label sets of training instances are fully labeled, with no missing labels. However, in many real-world multi-label datasets, the label assignments for training instances can be incomplete: some ground-truth labels may be missed by the labeler. This problem is especially common when the number of instances is very large and the labeling cost is high, which makes it almost impossible to obtain a fully labeled training set. In this paper, we study the problem of large-scale multi-label learning with incomplete label assignments. We propose an approach, called MPU, based on positive-and-unlabeled stochastic gradient descent and stacked models. Unlike prior work, our method can effectively and efficiently account for missing labels and label correlations simultaneously, and it is highly scalable, with time complexity linear in the size of the data. Extensive experiments on two real-world multi-label datasets show that our MPU model consistently outperforms other commonly used baselines.
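
    One way to picture the "positive-and-unlabeled SGD plus stacked models" idea is the rough NumPy sketch below. It is not the authors' MPU implementation: unobserved labels are treated as unlabeled rather than negative, stage 1 trains one linear classifier per label with a PU-style weighted logistic loss via SGD, and stage 2 stacks a second round of classifiers on the original features plus the stage-1 scores to capture label correlations. The per-label prior estimates, the weighting scheme, and the toy data are illustrative assumptions.

```python
# Minimal sketch of a PU-style approach to multi-label learning with missing
# labels (not the authors' MPU implementation).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pu_sgd(X, y_obs, prior, epochs=10, lr=0.1):
    """Train one linear classifier with a weighted PU logistic loss.
    y_obs[i] == 1 for observed positives, 0 for unlabeled (unknown)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            p = sigmoid(X[i] @ w)
            if y_obs[i] == 1:
                # observed positive: up-weight by inverse observed label frequency
                grad = (p - 1.0) * X[i] / max(prior, 1e-6)
            else:
                # unlabeled: treated as a soft negative
                grad = p * X[i]
            w -= lr * grad
    return w

def fit_stacked(X, Y_obs):
    """Two-stage (stacked) multi-label model; Y_obs is n x L with missing
    positives recorded as 0."""
    n, L = Y_obs.shape
    priors = Y_obs.mean(axis=0)              # crude per-label prior estimate
    W1 = np.stack([pu_sgd(X, Y_obs[:, l], priors[l]) for l in range(L)])
    S1 = sigmoid(X @ W1.T)                   # stage-1 scores, n x L
    X2 = np.hstack([X, S1])                  # augment features with scores
    W2 = np.stack([pu_sgd(X2, Y_obs[:, l], priors[l]) for l in range(L)])
    return W1, W2

def predict(X, W1, W2):
    X2 = np.hstack([X, sigmoid(X @ W1.T)])
    return sigmoid(X2 @ W2.T)                # per-label probabilities

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = (X @ rng.normal(size=(10, 3)) > 0).astype(float)
mask = rng.random(Y.shape) < 0.5             # half of the positives go missing
Y_obs = Y * mask
W1, W2 = fit_stacked(X, Y_obs)
print(predict(X, W1, W2).shape)              # (200, 3)
```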

    IDET: Iterative Difference-Enhanced Transformers for High-Quality Change Detection

    Change detection (CD) aims to detect changed regions within an image pair captured at different times and plays a significant role in diverse real-world applications. Nevertheless, most existing works focus on designing advanced network architectures to map the feature difference to the final change map, while ignoring the influence of the quality of the feature difference itself. In this paper, we study CD from a new perspective, i.e., how to optimize the feature difference to highlight changes and suppress unchanged regions, and we propose a novel module denoted iterative difference-enhanced transformers (IDET). IDET contains three transformers: two transformers extract the long-range information of the two images, and one transformer enhances the feature difference. In contrast to previous transformers, the third transformer takes the outputs of the first two to guide the enhancement of the feature difference iteratively. To achieve more effective refinement, we further propose multi-scale IDET-based change detection, which uses multi-scale representations of the images for multiple feature-difference refinements and a coarse-to-fine fusion strategy to combine all refinements. Our final CD method outperforms seven state-of-the-art methods on six large-scale datasets under diverse application scenarios, which demonstrates the importance of feature-difference enhancement and the effectiveness of IDET.
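
    The abstract's three-transformer layout can be illustrated with the loose PyTorch sketch below. It is not the IDET code: the token dimensions, the concatenation-based fusion of the difference with the two feature streams, and the per-token prediction head are illustrative assumptions; only the overall structure (two encoders plus an iteratively applied difference-enhancing transformer) follows the description above.

```python
# A loose sketch of the IDET idea as described in the abstract (not the
# authors' implementation): two transformer encoders extract long-range
# features of the two images, and a third transformer iteratively refines the
# feature difference using those features as guidance.
import torch
import torch.nn as nn

class IDETSketch(nn.Module):
    def __init__(self, dim=64, heads=4, iters=3):
        super().__init__()
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       batch_first=True), num_layers=2)
        self.enc_a = enc()          # long-range features of image A tokens
        self.enc_b = enc()          # long-range features of image B tokens
        self.enhance = enc()        # difference-enhancing transformer
        self.fuse = nn.Linear(3 * dim, dim)   # mix difference with guidance
        self.head = nn.Linear(dim, 1)         # per-token change score
        self.iters = iters

    def forward(self, tok_a, tok_b):
        fa, fb = self.enc_a(tok_a), self.enc_b(tok_b)
        diff = fa - fb                          # initial feature difference
        for _ in range(self.iters):             # iterative enhancement
            guided = self.fuse(torch.cat([diff, fa, fb], dim=-1))
            diff = self.enhance(guided)
        return torch.sigmoid(self.head(diff))   # change probability per token

# toy usage: batch of 2, 256 tokens (e.g. 16x16 patches), 64-dim embeddings
tok_a = torch.randn(2, 256, 64)
tok_b = torch.randn(2, 256, 64)
print(IDETSketch()(tok_a, tok_b).shape)         # torch.Size([2, 256, 1])
```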

    Background-Mixed Augmentation for Weakly Supervised Change Detection

    Change detection (CD) aims to decouple object changes (i.e., objects missing or appearing) from background changes (i.e., environment variations such as lighting and seasonal changes) in two images captured of the same scene over a long time span, with critical applications in disaster management, urban development, etc. In particular, the endless patterns of background change require detectors to generalize well to unseen environment variations, making this task significantly challenging. Recent deep learning-based methods develop novel network architectures or optimization strategies with paired training examples, which do not handle the generalization issue explicitly and require huge manual pixel-level annotation efforts. In this work, as a first attempt in the CD community, we study the generalization issue of CD from the perspective of data augmentation and develop a novel weakly supervised training algorithm that needs only image-level labels. Unlike general augmentation techniques for classification, we propose background-mixed augmentation, designed specifically for change detection: examples are augmented under the guidance of a set of background-changing images so that deep CD models see diverse environment variations. Moreover, we propose an augmented-and-real data consistency loss that significantly improves generalization. Our method, as a general framework, can enhance a wide range of existing deep learning-based detectors. We conduct extensive experiments on two public datasets and enhance four state-of-the-art methods, demonstrating the advantages of our approach. We release the code at https://github.com/tsingqguo/bgmix (accepted at AAAI 2023).
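
    A minimal sketch of the two ingredients the abstract names is given below. It is not the released BGMix code at the repository above: the alpha-blending used to mix a background-changing image into one view, the `model(img_a, img_b) -> change logits` interface, and the mean-squared consistency term are illustrative assumptions; the exact augmentation and loss in the paper may differ.

```python
# Sketch of background-mixed augmentation plus an augmented-vs-real
# consistency loss (assumed forms, not the authors' released implementation).
import torch
import torch.nn.functional as F

def background_mix(img, bg_img, alpha=0.5):
    """Blend a background-changing image into `img` (both B x C x H x W)."""
    return (1.0 - alpha) * img + alpha * bg_img

def bgmix_consistency_loss(model, img_a, img_b, bg_img, alpha=0.5):
    pred_real = model(img_a, img_b)                    # change logits, real pair
    img_b_aug = background_mix(img_b, bg_img, alpha)   # augment one view
    pred_aug = model(img_a, img_b_aug)                 # change logits, augmented pair
    # consistency: object-change predictions should be invariant to the
    # injected background variation
    return F.mse_loss(pred_aug, pred_real.detach())

# toy usage with a stand-in "detector"
model = lambda a, b: (a - b).mean(dim=1, keepdim=True)
img_a = torch.rand(2, 3, 64, 64)
img_b = torch.rand(2, 3, 64, 64)
bg = torch.rand(2, 3, 64, 64)
print(bgmix_consistency_loss(model, img_a, img_b, bg).item())
```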